GPU Computing in Julia

Biostat/Biomath M257

Author

Dr. Hua Zhou @ UCLA

Published

April 21, 2025

This lecture introduces GPU computing in Julia.

1 GPGPU

GPUs are ubiquitous in modern computers. The following table compares NVIDIA GPUs found on typical computer systems today.

NVIDIA GPUs           H100 PCIe             RTX 6000             RTX 5000
Computers             servers, cluster      desktop              laptop
Main usage            scientific computing  daily work, gaming   daily work
Memory                80 GB                 48 GB                16 GB
Memory bandwidth      2 TB/sec              960 GB/sec           576 GB/sec
Number of cores       ???                   ???                  ???
Processor clock       ??? GHz               ??? GHz              ??? GHz
Peak DP performance   26 TFLOPS             ??? TFLOPS           ??? TFLOPS
Peak SP performance   51 TFLOPS             91.1 TFLOPS          42.6 TFLOPS

2 GPU architecture vs CPU architecture

  • GPUs contain 1000s of processing cores on a single card; several cards can fit in a desktop PC

  • Each core carries out the same operations in parallel on different input data – single program, multiple data (SPMD) paradigm

  • Extremely high arithmetic throughput, if one can transfer the data onto and the results off of the processors quickly (see the back-of-the-envelope calculation after this list)
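
To make the data-transfer requirement concrete, here is a back-of-the-envelope calculation based on the H100 PCIe numbers in the table above (a sketch):

peak_flops = 26e12             # peak DP performance, flops/sec
bandwidth  = 2e12              # memory bandwidth, bytes/sec
doubles_per_sec = bandwidth / 8
peak_flops / doubles_per_sec   # ≈ 104 flops per double transferred

So a kernel must perform on the order of 100 floating-point operations per double moved through memory to keep the card busy; otherwise it is bandwidth-bound.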

Figures: Intel i7 die vs. NVIDIA Fermi die; the CPU-vs-GPU analogy of Einstein (a few powerful cores) vs. Rain Man (many simple cores).

3 GPGPU in Julia

GPU support in Julia is under active development. Check JuliaGPU for currently available packages.

There are multiple paradigms for programming GPUs in Julia, depending on the specific hardware; all of the following backends share the array-programming style sketched after this list.

  • CUDA is an ecosystem exclusively for Nvidia GPUs. There are extensive CUDA libraries for scientific computing: cuBLAS, cuRAND, cuSPARSE, cuSOLVER, cuDNN, …

    The CUDA.jl package allows defining arrays on Nvidia GPUs and overloads many common operations.

  • The AMDGPU.jl package allows defining arrays on AMD GPUs and overloads many common operations.

  • The Metal.jl package allows defining arrays on Apple Silicon GPUs and overloads many common operations.

    AppleAccelerate.jl wraps the macOS Accelerate framework, which provides high-performance libraries for linear algebra, signal processing, and image processing on Apple Silicon CPUs. It is the analog of MKL for Intel CPUs.

  • The oneAPI.jl package allows defining arrays on Intel GPUs and overloads many common operations.
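
All of these backends expose the same high-level programming style: move data into the backend's array type, and generic Julia code runs on the device through the overloaded operations. A minimal sketch using CUDA.jl (substitute ROCArray, MtlArray, or oneArray for the other vendors; scaled_add! is just an illustrative name):

using CUDA

# a generic scaled addition; nothing GPU-specific in the function body
scaled_add!(y, a, x) = (y .= a .* x .+ y)

x, y = rand(Float32, 10^6), rand(Float32, 10^6)
scaled_add!(y, 2.0f0, x)            # runs on the CPU

xd, yd = CuArray(x), CuArray(y)
scaled_add!(yd, 2.0f0, xd)          # identical code, runs on the GPU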

I’ll illustrate using CUDA.jl on a Linux laptop with a 13th-gen Intel Core i7-13800H CPU and an NVIDIA RTX 2000 Ada Generation Laptop GPU (8 GB device memory).

versioninfo()
Julia Version 1.11.5
Commit 760b2e5b739 (2025-04-14 06:53 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 20 × 13th Gen Intel(R) Core(TM) i7-13800H
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, goldmont)
Threads: 1 default, 0 interactive, 1 GC (on 20 virtual cores)

Load packages:

using Pkg

Pkg.activate(pwd())
Pkg.instantiate()
Pkg.status()
  Activating project at `~/2025spring/slides/09-juliagpu`
Status `~/2025spring/slides/09-juliagpu/Project.toml`

  [6e4b80f9] BenchmarkTools v1.6.0
  [052768ef] CUDA v5.7.3
  [bdcacae8] LoopVectorization v0.12.172
  [37e2e46d] LinearAlgebra v1.11.0

4 Query GPU devices in the system

using CUDA

CUDA.versioninfo()
CUDA runtime 12.8, artifact installation
CUDA driver 12.4
NVIDIA driver 553.5.0

CUDA libraries: 
- CUBLAS: 12.8.4
- CURAND: 10.3.9
- CUFFT: 11.3.3
- CUSOLVER: 11.7.3
- CUSPARSE: 12.5.8
- CUPTI: 2025.1.1 (API 26.0.0)
- NVML: 12.0.0+550.117

Julia packages: 
- CUDA: 5.7.3
- CUDA_Driver_jll: 0.12.1+1
- CUDA_Runtime_jll: 0.16.1+0

Toolchain:
- Julia: 1.11.5
- LLVM: 16.0.6

1 device:
  0: NVIDIA RTX 2000 Ada Generation Laptop GPU (sm_89, 173.738 MiB / 7.996 GiB available)
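
Beyond versioninfo, CUDA.jl can enumerate the available devices and query their properties. A small sketch using CUDA.jl's device-query helpers:

for dev in CUDA.devices()
    # name, compute capability, and total memory of each device
    println(CUDA.name(dev), ": capability ", CUDA.capability(dev),
        ", ", round(CUDA.totalmem(dev) / 2^30, digits = 1), " GiB")
end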

5 Transfer data between main memory and GPU

using Random
Random.seed!(257)

# generate SP data on CPU
x = rand(Float32, 3, 3)
# transfer data from CPU to GPU
xd = CuArray(x)
3×3 CuArray{Float32, 2, CUDA.DeviceMemory}:
 0.145793  0.939801  0.479926
 0.567772  0.577251  0.81655
 0.800538  0.38893   0.914135
# generate array on GPU directly (CUDA.ones(3, 3) would work too)
yd = CuArray(ones(Float32, 3, 3))
3×3 CuArray{Float32, 2, CUDA.DeviceMemory}:
 1.0  1.0  1.0
 1.0  1.0  1.0
 1.0  1.0  1.0
# collect data from GPU to CPU
x = collect(xd)
3×3 Matrix{Float32}:
 0.145793  0.939801  0.479926
 0.567772  0.577251  0.81655
 0.800538  0.38893   0.914135
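
A few related idioms, shown as a sketch: cu is a convenience converter (by default it also converts Float64 elements to Float32, which GPUs strongly prefer), Array is an alternative to collect, and disabling scalar indexing catches accidental element-by-element access to GPU arrays, which is extremely slow.

xd = cu(x)                # CPU -> GPU; also converts Float64 eltype to Float32
x  = Array(xd)            # GPU -> CPU, equivalent to collect(xd)
CUDA.allowscalar(false)   # error on scalar indexing of CuArrays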

6 Linear algebra

using BenchmarkTools, LinearAlgebra, Random

Random.seed!(257)

n = 2^14
# on CPU
x = rand(Float32, n, n)
y = rand(Float32, n, n)
z = zeros(Float32, n, n)
# on GPU
xd = CuArray(x)
yd = CuArray(y)
zd = CuArray(z);

6.1 Dot product

# SP matrix dot product on CPU: tr(X'Y)
bm_cpu = @benchmark dot($x, $y)
BenchmarkTools.Trial: 24 samples with 1 evaluation per sample.
 Range (min … max):  120.225 ms … 297.097 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     216.021 ms                GC (median):    0.00%
 Time  (mean ± σ):   208.404 ms ±  45.054 ms   GC (mean ± σ):  0.00% ± 0.00%
                                  █                           
  ▇▁▁▁▇▁▁▁▁▁▁▇▇▁▇▁▁▇▁▁▇▁▇▁▁▇▁▁▇▁▁█▁█▁▇▇▁▁▁▁▇▁▁▇▁▁▁▁▇▁▁▁▁▇▁▁▁▇ ▁
  120 ms           Histogram: frequency by time          297 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
# SP matrix dot product on GPU: tr(X'Y)
# why are there allocations?
bm_gpu = @benchmark CUDA.@sync dot($xd, $yd)
BenchmarkTools.Trial: 471 samples with 1 evaluation per sample.
 Range (min … max):  10.163 ms … 13.445 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     10.518 ms                GC (median):    0.00%
 Time  (mean ± σ):   10.612 ms ± 422.558 μs   GC (mean ± σ):  0.00% ± 0.00%
     ▁▆█▆                                                    
  ▃▃▅████▇▅▅▄▃▄▃▃▃▂▂▃▂▃▂▁▂▂▂▁▂▁▂▁▂▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂ ▃
  10.2 ms         Histogram: frequency by time         13.1 ms <
 Memory estimate: 1.23 KiB, allocs estimate: 58.
# speedup on GPU over CPU
median(bm_cpu.times) / median(bm_gpu.times)
20.538494061907674
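
The allocations reported for the GPU benchmark are most likely host-side bookkeeping from launching the reduction kernels and copying the scalar result back, not device memory for the data. CUDA.@time reports CPU-side and GPU-side allocations separately (a sketch):

CUDA.@time dot(xd, yd)   # prints CPU vs GPU allocation counts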

6.2 Broadcast

# SP broadcast on CPU: z .= x .* y
bm_cpu = @benchmark $z .= $x .* $y
BenchmarkTools.Trial: 24 samples with 1 evaluation per sample.
 Range (min … max):  149.392 ms … 285.251 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     201.841 ms                GC (median):    0.00%
 Time  (mean ± σ):   211.201 ms ±  52.516 ms   GC (mean ± σ):  0.00% ± 0.00%
  █ ▁█▁▁▁▁▁   ▁  ▁               ▁    ▁ ▁    ▁   █        █▁▁ █  
  █▁███████▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁█▁█▁▁▁▁█▁▁▁█▁▁▁▁▁▁▁▁███▁█ ▁
  149 ms           Histogram: frequency by time          285 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
# SP broadcast on GPU: z .= x .* y
# why are there allocations?
bm_gpu = @benchmark CUDA.@sync $zd .= $xd .* $yd
BenchmarkTools.Trial: 266 samples with 1 evaluation per sample.
 Range (min … max):  16.378 ms … 25.025 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     18.272 ms               GC (median):    0.00%
 Time  (mean ± σ):   18.769 ms ±  1.966 ms   GC (mean ± σ):  0.00% ± 0.00%
   ▄▅▁█▁ ▃▄▂   ▁ ▁                                            
  ▇█████▇███▆▆▇███▆▇▄▆▆▆▅▆▆▄▆▆▅▁▃▅▆▄▁▄▄▅▃▅▃▆▅▄▁▁▄▆▄▁▁▃▆▃▇▃▃ ▄
  16.4 ms         Histogram: frequency by time        23.5 ms <
 Memory estimate: 3.34 KiB, allocs estimate: 121.
# speedup
median(bm_cpu.times) / median(bm_gpu.times)
11.04645564257901
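
A fused broadcast like zd .= xd .* yd compiles to a single GPU kernel in which each thread handles one element, the SPMD paradigm from Section 2. For illustration, a minimal hand-written kernel that does the same thing, sketched with CUDA.jl's @cuda interface (mul_kernel! is an illustrative name):

function mul_kernel!(z, x, y)
    # global thread index; each thread updates one element
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(z)
        @inbounds z[i] = x[i] * y[i]
    end
    return nothing
end

nthreads = 256
@cuda threads=nthreads blocks=cld(length(zd), nthreads) mul_kernel!(zd, xd, yd)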

6.3 Matrix multiplication

# SP matrix multiplication on GPU
bm_gpu = @benchmark CUDA.@sync mul!($zd, $xd, $yd)
BenchmarkTools.Trial: 3 samples with 1 evaluation per sample.
 Range (min … max):  1.760 s … 2.216 s   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.880 s                GC (median):    0.00%
 Time  (mean ± σ):   1.952 s ± 236.160 ms   GC (mean ± σ):  0.00% ± 0.00%
   ▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  1.76 s         Histogram: frequency by time         2.22 s <
 Memory estimate: 2.28 KiB, allocs estimate: 104.

For this problem size on this machine, the GPU achieves about 5 TFLOPS throughput in single precision. (Multiplying two n×n matrices takes 2n³ flops, hence the formula below.)

# SP throughput on GPU
(2n^3) / (minimum(bm_gpu.times) / 1e9)
4.997960387919036e12
# SP matrix multiplication on CPU
bm_cpu = @benchmark mul!($z, $x, $y)
BenchmarkTools.Trial: 1 sample with 1 evaluation per sample.
 Single result which took 25.606 s (0.00% GC) to evaluate,
 with a memory estimate of 0 bytes, over 0 allocations.
# SP throughput on CPU
(2n^3) / (minimum(bm_cpu.times) / 1e9)
3.4351371519838055e11

We see about a 13x speedup from the GPU in this matrix multiplication example.

median(bm_cpu.times) / median(bm_gpu.times)
13.619347867332454
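
As a sanity check, the GPU and CPU products should agree up to Float32 rounding error (a sketch):

norm(collect(zd) - z) / norm(z)   # small relative difference expected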

6.4 Cholesky

# Cholesky of the Gram matrix X'X + I
xtxd = xd'xd + I
bm_gpu = @benchmark CUDA.@sync cholesky($(xtxd))
bm_gpu
BenchmarkTools.Trial: 1 sample with 1 evaluation per sample.
 Single result which took 15.237 s (0.02% GC) to evaluate,
 with a memory estimate of 7.03 KiB, over 287 allocations.
xtx = collect(xtxd)
bm_cpu = @benchmark LinearAlgebra.cholesky($(Symmetric(xtx)))
bm_cpu
BenchmarkTools.Trial: 1 sample with 1 evaluation per sample.
 Single result which took 7.536 s (0.00% GC) to evaluate,
 with a memory estimate of 1.00 GiB, over 3 allocations.

Here the GPU does not beat the CPU: the ratio of median times below is about 0.5, i.e., the Cholesky factorization runs roughly 2x slower on this GPU than on the CPU.

median(bm_cpu.times) / median(bm_gpu.times)
0.4945712003645284
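
The factorization can also be used to solve linear systems directly on the device. A sketch, assuming CUDA.jl's CUSOLVER wrappers support \ on the Cholesky object:

b  = rand(Float32, n)
bd = CuArray(b)
βd = cholesky(xtxd) \ bd             # solve (X'X + I) β = b on the GPU
β  = cholesky(Symmetric(xtx)) \ b    # the same solve on the CPU
norm(collect(βd) - β) / norm(β)      # small relative difference expected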

7 Evaluation of elementary and special functions on GPU

7.1 Sine and log functions

# elementwise function on GPU arrays
fill!(yd, 1)
bm_gpu = @benchmark CUDA.@sync $zd .= log.($yd .+ sin.($xd))
bm_gpu
BenchmarkTools.Trial: 206 samples with 1 evaluation per sample.
 Range (min … max):  20.304 ms … 30.321 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     23.748 ms               GC (median):    0.00%
 Time  (mean ± σ):   24.227 ms ±  2.278 ms   GC (mean ± σ):  0.00% ± 0.00%
                 ▁▄▅█ ▁           ▂                            
  ▃▂▄▄▃▅▄▃▄▇▄▃▂▄▆████▆█▂▅▃▅▆▃▄▃▅█▇▄▂▃▃▄▄▃▃▄▂▇▃▃▂▂▄▃▂▃▂▄▂▆▄▃ ▃
  20.3 ms         Histogram: frequency by time          29 ms <
 Memory estimate: 3.34 KiB, allocs estimate: 121.
# elementwise function on CPU arrays
x, y, z = collect(xd), collect(yd), collect(zd)
bm_cpu = @benchmark $z .= log.($y .+ sin.($x))
bm_cpu
BenchmarkTools.Trial: 1 sample with 1 evaluation per sample.
 Single result which took 6.248 s (0.00% GC) to evaluate,
 with a memory estimate of 0 bytes, over 0 allocations.
# Speed up
median(bm_cpu.times) / median(bm_gpu.times)
263.10511991036014

The GPU brings a dramatic speedup (>250x here) to the massive evaluation of elementary math functions.

7.2 tanh function

bm_cpu = @benchmark $z .= tanh.($x) # on CPU
bm_cpu
BenchmarkTools.Trial: 3 samples with 1 evaluation per sample.
 Range (min … max):  2.227 s … 2.333 s   GC (min … max): 0.00% … 0.00%
 Time  (median):     2.245 s               GC (median):    0.00%
 Time  (mean ± σ):   2.268 s ± 56.895 ms   GC (mean ± σ):  0.00% ± 0.00%
   ▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  2.23 s         Histogram: frequency by time        2.33 s <
 Memory estimate: 16 bytes, allocs estimate: 1.
bm_gpu = @benchmark CUDA.@sync $zd .= tanh.($xd) # on GPU
bm_gpu
BenchmarkTools.Trial: 154 samples with 1 evaluation per sample.
 Range (min … max):  22.425 ms … 97.954 ms   GC (min … max):  8.56% … 1.68%
 Time  (median):     30.644 ms               GC (median):    11.84%
 Time  (mean ± σ):   32.455 ms ±  8.055 ms   GC (mean ± σ):  10.90% ± 4.63%
                ▄ ▂▂   ▂█ ▂    ▃                         
  ▃▁▁▁▁▁▁▁▁▁▁▅▆▁███████████▆▅█▅▆▁▇▆█▅▅▇▃▆▃▇▁▆▅▅▃▆▅▁▅▅▃▁▁▁▅▃ ▃
  22.4 ms         Histogram: frequency by time        41.5 ms <
 Memory estimate: 5.39 KiB, allocs estimate: 187.

CUDA.jl accelerates the evaluation of the tanh function by a factor of

median(bm_cpu.times) / median(bm_gpu.times)
73.2540472089944